Product · Phase 01 of 04

AI-Ready Requirements

Three principles, seven fields, and a prompt library that makes your requirements consumable by every AI tool in the pipeline — without rewriting them for each one.

What "AI-Ready" Actually Means

Three principles that separate requirements AI can act on from requirements AI will misinterpret.

Explicit over Implicit

If a requirement relies on a human to "just know" context that isn't written down, it's not AI-ready. AI tools can't read minds or absorb organizational history through osmosis. If the context isn't in the document, it isn't available to the tool — or the developer reading it six months from now.

Behavioral over Aspirational

If it describes a quality — "user-friendly," "intuitive," "performant" — without defining observable behavior, it's not AI-ready. Describe what happens: what the system displays, calculates, validates, or prevents. "Fast" means nothing to a code generator. "Responds within 500ms" means everything.

Decomposed over Monolithic

If it bundles five workflows into one paragraph, it's not AI-ready. AI code generation tools work best with single-responsibility requirements. A requirement that covers enrollment tracking, audit logging, and export formatting in one story will produce code that conflates all three — difficult to test and impossible to iterate.

The difference these principles create isn't subtle. Here's the same requirement written both ways:

Not AI-Ready
The system should handle customer data securely.

This is aspirational and implicit. "Securely" requires the reader to know what security means in a SOC 2 Type II and PCI-DSS regulated financial environment — and AI tools do not. An AI code generator handed this requirement will produce something that feels secure and is not.

AI-Ready
The system must encrypt all PII fields (name, DOB, SSN, account number) using AES-256 at rest and TLS 1.2+ in transit, and must not write PII to application logs.

This is behavioral, explicit, and decomposed. Every clause is directly actionable by Copilot and Claude. The encryption-at-rest layer, the transport security configuration, and the log sanitizer are three separately testable requirements — all in one sentence.
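To see how that decomposition plays out in code, here is a minimal sketch of the third clause only, assuming a flat log payload and camelCase keys for the four PII fields the requirement names; the encryption-at-rest and TLS configuration would be separate, equally testable units.

```typescript
// Minimal sketch of the "must not write PII to application logs" clause.
// The PII field names mirror the requirement; the payload shape is an assumption.
const PII_FIELDS = ["name", "dob", "ssn", "accountNumber"] as const;

type LogPayload = Record<string, unknown>;

// Returns a copy of the payload that is safe to hand to the logger.
export function sanitizeForLogging(payload: LogPayload): LogPayload {
  const safe: LogPayload = {};
  for (const [key, value] of Object.entries(payload)) {
    safe[key] = (PII_FIELDS as readonly string[]).includes(key) ? "[REDACTED]" : value;
  }
  return safe;
}

// sanitizeForLogging({ name: "A. Customer", status: "approved" })
// -> { name: "[REDACTED]", status: "approved" }
```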

The AI-Ready Story Template

Seven fields. Each one feeds a different part of the AI toolchain.

Field 01

User Story

As a [specific role + context], I need to [action] so that [measurable outcome]. The role must be specific enough that it defines constraints — "Loan Officer reviewing a multi-branch commercial lending portfolio" generates very different requirements than "user."

Field 02

Acceptance Criteria

Given / When / Then — covering the happy path, boundary conditions, and error states. Testable means a developer can answer "pass" or "fail" without interpretation. If it requires judgment to evaluate, it needs to be rewritten.

Field 03

Data Context

Inputs and outputs: field names, data types, validation rules, sources, and calculated fields with their formulas. Copilot generates data models and validation logic from this field — without it, it guesses, and gets it wrong in ways that are expensive to fix in code review.
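As a rough illustration of what this field buys you, here is a sketch of validation logic a tool like Copilot could derive from one declared rule; the field name and rule (accountNumber, required, exactly 10 digits) are hypothetical, not part of the template.

```typescript
// Hypothetical Data Context entry:
//   accountNumber (string, required, exactly 10 digits, source: core banking API)
// With the rule written down, generating the validation is mechanical.
export function validateAccountNumber(value: string): string[] {
  const errors: string[] = [];
  if (value.length === 0) {
    errors.push("accountNumber is required");
  } else if (!/^\d{10}$/.test(value)) {
    errors.push("accountNumber must be exactly 10 digits");
  }
  return errors;
}
```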

Field 04

UI/UX Behavioral Description

Layout, interactions, and every named state: loading, empty, success, error. "A sortable table" is not a behavioral description. "A table that re-sorts within 500ms on column header click without a full page reload" is. Figma Make and Claude Design produce usable starting points from the second kind.
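Named states translate directly into component state. A minimal sketch, assuming a TypeScript front end; the type and property names are illustrative, not prescribed by the template:

```typescript
// The named states from the behavioral description become an explicit
// discriminated union instead of a pile of booleans. Names are illustrative.
interface TableRow {
  [column: string]: string | number;
}

type ReviewTableState =
  | { kind: "loading" }
  | { kind: "empty" }
  | { kind: "success"; rows: TableRow[]; sortColumn: string; sortAscending: boolean }
  | { kind: "error"; message: string; lastSuccessfulLoad: string };
```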

Field 05

Edge Cases & Error States

Boundary conditions, timeout scenarios, concurrent users, and the data states that most stakeholders don't think about until QA finds them. Document these explicitly — AI test generation tools produce tests for exactly what you write here and nothing more.

Field 06

Compliance & Regulatory

SOC 2 Type II and PII handling rules, audit trail requirements, data retention policies, and accessibility standards (WCAG 2.1 AA at minimum). AI tools will not infer regulatory requirements. If it isn't in the requirement, it won't be in the code.

Field 07

Dependencies & Integrations

Upstream systems providing data, downstream consumers reading the output, APIs, shared data contracts, and feature flags. Claude used for architecture analysis treats this field as the map of the system — missing entries mean generated components that conflict with real integrations at the worst possible time.

Most of this you already know — it's just not written down. And once it is, AI tools can do remarkable things with it. The template is not about adding overhead. It's about making implicit knowledge explicit in a form that travels further than your memory.

Tips & Tricks

If a story contains multiple "and" clauses, split it before you hand it to AI tooling. One behavior per requirement produces cleaner code generation, cleaner test coverage, and cleaner traceability.

Why Each Field Matters to the AI Toolchain

Each template field has a direct, traceable connection to a specific AI tool failure mode.

Template Field: User Story + Acceptance Criteria
AI Tool It Feeds: Copilot, Claude (code gen + test gen)
What Happens Without It: AI generates code that doesn't match intended behavior. Tests become impossible to auto-generate because there's no behavioral specification to derive them from.

Template Field: Data Context
AI Tool It Feeds: Copilot (data models, validation)
What Happens Without It: Copilot guesses at data types and gets them wrong. Developers manually translate field names and validation rules into code — the exact repetitive work AI should be handling.

Template Field: UI/UX Behavioral Description
AI Tool It Feeds: Figma Make, Claude Design
What Happens Without It: "A form" produces a generic form. Detailed behavioral descriptions — states, interactions, column behavior, sort order — produce usable starting points for design that require iteration, not reconstruction.

Template Field: Edge Cases & Error States
AI Tool It Feeds: Claude (test generation)
What Happens Without It: AI test generation tools derive test scenarios from what you've written. No edge cases means no edge case tests. QA finds them instead, in staging, at the end of the sprint.

Template Field: Compliance & Regulatory Notes
AI Tool It Feeds: Copilot, Claude (code gen)
What Happens Without It: AI tools will not infer SOC 2 Type II requirements, audit trail obligations, or PII handling rules. If AES-256 encryption and export audit logging are not specified, they will not appear in generated code.

Template Field: Dependencies & Integrations
AI Tool It Feeds: Claude (architecture analysis)
What Happens Without It: AI generates components in isolation that conflict with existing API contracts, miss integration points, or duplicate data that already lives upstream — technical debt introduced before the first commit.

The pattern you'll notice

In every case, a missing field doesn't cause the AI tool to fail visibly — it causes it to succeed silently at the wrong thing. The output looks plausible. It compiles. It's wrong. The cost of that wrongness accumulates fastest when the requirement gets to Copilot, because code that looks correct is harder to audit than a design that looks off.

Writing for AI — Language That Works

The vocabulary you use determines whether AI tools can act on your requirements or have to interpret them.

Use These Patterns

  • Concrete verbs: display, calculate, validate, submit, navigate, filter, sort, disable, prevent, redirect. These map directly to code paths.
  • Specific quantities: "within 3 seconds," "maximum 50 characters," "at least one required," "paginate at 50 rows." Every number is a test case (see the sketch after this list).
  • Named states: active, inactive, pending, expired, locked, draft, loading, empty, error. Named states become component state variables.
  • Defined roles: "Loan Officer managing LOAN-2024-Q3 across 6 branches," "Compliance Officer with read-only access" — not "user." The role constrains the requirement meaningfully.
  • Explicit conditionals: "If [condition], then [behavior]." Every conditional is a branch in the code and a test case in the acceptance criteria.
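Here is the sketch referenced above, illustrating "every number is a test case"; the comment field and its 50-character cap are a hypothetical requirement, not taken from this document's example.

```typescript
import { strict as assert } from "node:assert";

// Hypothetical requirement: "Comment field accepts a maximum of 50 characters."
const MAX_COMMENT_LENGTH = 50;

function isValidComment(comment: string): boolean {
  return comment.length <= MAX_COMMENT_LENGTH;
}

// The quantity in the requirement defines the boundary tests directly.
assert.equal(isValidComment("a".repeat(50)), true);  // at the limit: pass
assert.equal(isValidComment("a".repeat(51)), false); // one over the limit: fail
```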

Avoid These Patterns

  • Subjective adjectives: intuitive, user-friendly, fast, modern, clean, simple. These describe a feeling, not a behavior. They're unverifiable and untestable.
  • Implied knowledge: "the usual workflow," "standard compliance," "as per the existing process." If it isn't documented here, it doesn't exist to an AI tool.
  • Compound requirements: "the system should do X and also Y and also Z" — these are three requirements. Split them so each can be tested, prioritized, and built independently.
  • Ambiguous pronouns: "it should update," "they should see" — what is "it"? Which "they"? Every ambiguous pronoun is a place where AI tools guess, and developers debate in code review.
  • Passive voice obligations: "data should be stored" — by what? When? For how long? Passive voice hides the actor and the trigger. Make both explicit.

Use Claude to audit your language before you finalize

Before locking a requirement, paste it into Claude with this prompt: "Review this requirement and flag any subjective adjectives, ambiguous pronouns, implied knowledge, or compound statements that should be split into separate requirements." It takes 30 seconds and catches the issues that become expensive when they reach the development sprint.

LOAN-2024-Q3 — Fully Structured

What a complete, AI-ready requirement looks like in practice — not abbreviated, not idealized.

This is the Loan Application Review Table requirement for portfolio LOAN-2024-Q3, written with all seven fields populated. This is the version you hand to Copilot for data model generation, Claude for test case derivation, and Figma Make for initial layout. No rewriting required per tool — the same document feeds all three.

Feature: Loan Application Review Table — Portfolio LOAN-2024-Q3

User Story: As a Loan Officer managing portfolio LOAN-2024-Q3 across 6 branches, I need to view application status for all branches ranked by risk level so that I can identify which branches need intervention before the weekly portfolio review call, without building a manual spreadsheet.

Acceptance Criteria:
- Given I navigate to the application review dashboard, When the page loads, Then all active branches appear in a sortable table, defaulting to sort by approval risk (furthest from target first).
- Given a branch has 0 approved applications, When it appears in the table, Then the Status column shows "No Data" with an info icon, distinguishing it from branches that are actively behind target.
- Given I click a column header, When the sort is applied, Then the table re-sorts within 500ms without a full page reload.
- Given I click Print, When the print dialog opens, Then a print-optimized layout renders with: Branch ID, Branch Name, Target, Actual, % to Target, Risk Status, Last Updated — one branch per row, no pagination artifacts.

Data Context:
- Inputs: Branch ID (string), Branch Name (string), Target Approvals (int), Actual Approvals (int), Application Date (ISO 8601), Last Updated (timestamp)
- Outputs: % to Target (calculated: Actual/Target × 100, round to 1 decimal), Risk Status (calculated: Behind if <70% at >50% of quarter duration, At Risk if 70–89%, On Track if ≥90%)
- Source: Loan Origination System API (Salesforce CRM) — endpoint TBD
- Refresh: On page load + manual refresh button; no real-time polling required

UI/UX Behavioral Description:
- Layout: Full-width table, sticky header row, 7 columns. Status column uses icon + text label (never color alone — color-blind safe per WCAG 2.1 AA). Icons: Warning triangle (Behind), Caution circle (At Risk), Check (On Track).
- Interactions: Column sort (asc/desc toggle), global filter by Status (multi-select chip), export to CSV button (top-right).
- States: Loading — skeleton animation, 3 animated rows, matching column widths. Empty — "No branches found for this portfolio" + refresh CTA. Error — "Unable to load application data" + retry button + timestamp of last successful load.

Edge Cases & Error States:
- Branch with Target Approvals = 0: show "TBD" in target column, exclude from risk calculation, show "Pending" status.
- API timeout >10s: show error state with retry; preserve last known data if available and display "Last updated [timestamp]" banner.
- Portfolio with >50 branches: paginate at 50 rows, show total count above table.
- User has read-only role (Compliance Officer): hide export button entirely (no data exfiltration for read-only users per compliance policy — do not disable, remove).

Compliance & Regulatory:
- No PII in this view. Branch codes are anonymized. No audit trail required for read operations on this screen.
- Export to CSV requires audit log entry: user ID, timestamp, portfolio ID, number of rows exported, applied filters at time of export.
- Accessibility: WCAG 2.1 AA. All table headers must have scope attributes. Sort state must be announced to screen readers via aria-sort.
- Color is never the sole indicator of status (deuteranopia/protanopia accommodation). Each status row uses icon + text label in addition to color.

Dependencies & Integrations:
- Salesforce CRM API (read — application data, branch metadata)
- User role/permission service (read — to determine export button visibility)
- Audit log service (write — export events only)
- No upstream write dependencies — this is a read-only view
- Downstream: no systems consume data from this screen directly

What this document enables downstream

Hand the Data Context to Copilot and ask it to generate a TypeScript interface and validation function — done in under two minutes. Paste the UI/UX section into Figma Make and ask for a layout starting point. Paste Acceptance Criteria plus Edge Cases into Claude and ask for a complete test suite. Same document, three tools, no rewriting. That's what AI-ready means.
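For a sense of what that first step can produce, here is a sketch of the interface and calculation logic the Data Context above could generate. The names are illustrative, and the share of the quarter already elapsed is passed in by the caller, since the requirement leaves quarter boundaries to the calling context.

```typescript
// Sketch of what the Data Context section could generate. Names are illustrative.
interface BranchApplicationRecord {
  branchId: string;
  branchName: string;
  targetApprovals: number;   // int per Data Context
  actualApprovals: number;   // int per Data Context
  applicationDate: string;   // ISO 8601
  lastUpdated: string;       // timestamp
}

type RiskStatus = "Behind" | "At Risk" | "On Track" | "Pending";

// % to Target: Actual / Target x 100, rounded to 1 decimal place.
function percentToTarget(record: BranchApplicationRecord): number | null {
  if (record.targetApprovals === 0) return null; // edge case: target is TBD
  return Math.round((record.actualApprovals / record.targetApprovals) * 1000) / 10;
}

// Risk Status per the calculated-field rules: Behind if <70% at >50% of quarter
// duration, At Risk if 70-89%, On Track if >=90%. The requirement doesn't define
// the <70% case early in the quarter; it's treated as At Risk here, which is
// exactly the kind of gap a requirement review prompt should flag.
function riskStatus(record: BranchApplicationRecord, quarterElapsedPercent: number): RiskStatus {
  const pct = percentToTarget(record);
  if (pct === null) return "Pending";   // Target Approvals = 0 edge case
  if (pct >= 90) return "On Track";
  if (pct >= 70) return "At Risk";
  return quarterElapsedPercent > 50 ? "Behind" : "At Risk";
}
```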

Claude Prompts for Product Leads

Six ready-to-use prompts that cover the full requirements lifecycle — from intake through ADO formatting.

Intake Structuring

"Here are my notes from a stakeholder meeting about [topic]. Extract all requirements as user stories with acceptance criteria using this template: [paste template]. Flag any ambiguities and list follow-up questions that need to be answered before development can begin."

Requirement Review

"Review this requirement for AI-readiness. Identify ambiguities, missing edge cases, implicit assumptions, and any areas where an AI code generation tool would need to guess at the intended behavior. Suggest specific improvements for each issue you find."

Decomposition

"This requirement is too broad for a single sprint story. Break it into smaller, independently deliverable user stories that each describe a single user action. Each story should be completable in one sprint and testable in isolation from the others."

Test Generation

"Given this requirement and acceptance criteria, generate a comprehensive set of test scenarios including: happy path, boundary cases, error states, and edge cases. For each scenario, include the Given/When/Then structure and the expected pass/fail condition."

ADO / Helix Formatting

"Format these requirements as Azure DevOps work items with title, description, acceptance criteria, and relevant tags. Also produce a parallel version formatted for Helix ALM import. Flag any fields that require manual input before import (assigned to, sprint, etc.)."

Meeting Transcript

"I've attached a Teams meeting transcript (.vtt) and our requirements template. Populate the template from the transcript. Focus on functional requirements, acceptance criteria, and action items. For anything unclear in the transcript, list it as an open question rather than guessing at the intent."